Correlated topic modeling has been limited to small model and problem sizes due to its high computational cost and poor scaling. In this paper, we propose a new model that learns compact topic embeddings and captures topic correlations through the closeness between the topic vectors. Our method enables efficient inference in the low-dimensional embedding space, reducing the cubic or quadratic time complexity of previous approaches to linear w.r.t. the number of topics. We further speed up variational inference with a fast sampler that exploits the sparsity of topic occurrence. Extensive experiments show that our approach can handle model and data scales several orders of magnitude larger than those of existing correlated topic models, without sacrificing modeling quality: it achieves competitive or superior performance in document classification and retrieval.
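To make the scaling claim concrete, below is a minimal sketch, not the paper's actual implementation, of how low-dimensional topic embeddings can induce topic correlations at linear cost. It assumes a logistic-normal prior whose covariance takes the low-rank form U Uᵀ + σ²I built from hypothetical embeddings U (the embedding matrix, noise scale, and sizes here are illustrative placeholders); drawing from such a prior costs O(Kd) per document instead of the O(K³) Cholesky factorization required by a full K×K covariance, matching the linear-in-topics complexity described above.

```python
import numpy as np

K, d = 1000, 50  # number of topics, embedding dimension (illustrative sizes)
rng = np.random.default_rng(0)

# Hypothetical topic embeddings; in the paper these are learned, and
# topic correlation is captured by the closeness of the topic vectors.
U = rng.normal(scale=0.1, size=(K, d))
sigma = 0.5  # noise scale; Sigma = U U^T + sigma^2 I is never formed explicitly

def sample_topic_logits(mu):
    """Draw eta ~ N(mu, U U^T + sigma^2 I) in O(K d) time.

    Uses the identity eta = mu + U z + sigma * eps with z ~ N(0, I_d)
    and eps ~ N(0, I_K), so correlated topic logits are obtained without
    ever factorizing the full K x K covariance matrix.
    """
    z = rng.normal(size=d)
    eps = rng.normal(size=K)
    return mu + U @ z + sigma * eps

eta = sample_topic_logits(np.zeros(K))
theta = np.exp(eta - eta.max())
theta /= theta.sum()  # logistic-normal topic proportions for one document
```

The design point this illustrates: because the covariance is parameterized through the embeddings, all prior-related computation stays in the d-dimensional embedding space, which is what allows correlation structure to be modeled at topic counts where a dense covariance would be infeasible.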